Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, and current theoretical techniques, which require exponential sample complexity and complicated improper learning rules, fall far short of answering this need. In this work we study the fundamental paradigm of (robust) empirical risk minimization (RERM), a simple process in which the learner outputs any hypothesis minimizing its training error. RERM famously fails to robustly learn VC classes (Montasser et al., 2019a), a failure we show extends even to 'nice' settings such as (bounded) halfspaces. We therefore study a recent relaxation of the robust model called tolerant robust learning (Ashtiani et al., 2022), in which the output classifier is compared to the best achievable error over slightly larger perturbation sets. We show that under geometric niceness conditions, a natural tolerant variant of RERM is indeed sufficient for γ-tolerant robust learning of VC classes over ℝ^d, and requires only $\tilde{O}\!\left(\frac{\mathrm{VC}(H)\, d \log\frac{D}{\gamma\delta}}{\epsilon^2}\right)$ samples for robustness regions of (maximum) diameter D.
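To make the RERM paradigm concrete, below is a minimal, illustrative sketch of a robust empirical risk minimizer over a finite hypothesis class, where the adversary is approximated by sampling random perturbations from the γ-ball around each training point. The names (`robust_empirical_risk`, `tolerant_rerm`) and the sampling-based perturbation oracle are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def robust_empirical_risk(h, X, y, gamma, n_perturb=50, rng=None):
    """Estimate the robust risk of classifier h by sampling random
    perturbations from the L2 ball of radius gamma around each point
    (a sampling-based stand-in for an exact adversarial oracle)."""
    rng = np.random.default_rng(0) if rng is None else rng
    errors = 0
    for x, label in zip(X, y):
        d = x.shape[0]
        # Sample candidate points uniformly from the gamma-ball around x.
        deltas = rng.normal(size=(n_perturb, d))
        deltas /= np.linalg.norm(deltas, axis=1, keepdims=True)
        radii = gamma * rng.uniform(size=(n_perturb, 1)) ** (1.0 / d)
        candidates = x + radii * deltas
        # x counts as robustly misclassified if x itself or any sampled
        # perturbation of it is misclassified.
        if h(x) != label or any(h(c) != label for c in candidates):
            errors += 1
    return errors / len(X)

def tolerant_rerm(hypotheses, X, y, gamma):
    """Return any hypothesis minimizing the sampled robust empirical risk."""
    return min(hypotheses, key=lambda h: robust_empirical_risk(h, X, y, gamma))

# Example: a small pool of halfspace classifiers sign(w . x).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])
halfspaces = [(lambda x, w=w: np.sign(w @ x) or 1.0)
              for w in rng.normal(size=(20, 2))]
best = tolerant_rerm(halfspaces, X, y, gamma=0.1)
```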
Micro-grid operations offer local reliability: in the event of faults or low-voltage/frequency events on the utility side, micro-grids can disconnect from the main grid and operate autonomously while continuing to supply power to local customers. With the ever-increasing penetration of renewable generation, however, micro-grid operations become increasingly complicated because of the associated voltage fluctuations. As a result, transformer taps are adjusted frequently, leading to fast degradation of expensive tap-changer transformers. In the islanding mode, further difficulty comes from the drop in voltage and frequency upon disconnecting from the main grid. To model the above appropriately, non-linear AC power flow constraints are necessary. Computationally, the discrete nature of tap-changer operations and the stochasticity caused by renewables add two layers of difficulty on top of an already complicated AC-OPF problem. To resolve these computational difficulties, the main principles of the recently developed “l1-proximal” Surrogate Lagrangian Relaxation are extended. Testing results based on the nine-bus system demonstrate the efficiency of the method in obtaining exact feasible solutions for micro-grid operations, thereby avoiding approximations inherent in existing methods; in particular, fast convergence of the method to feasible solutions is demonstrated. It is also demonstrated that through the optimization the number of tap changes is drastically reduced, and that the method efficiently handles networks with meshed topologies.
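For intuition about the relaxation machinery, the following toy sketch shows a basic Surrogate Lagrangian Relaxation loop on a two-generator dispatch problem with a relaxed coupling (demand) constraint and discrete output levels standing in for tap positions. This is not the paper's formulation: the micro-grid problem involves non-linear AC power flow, tap-changer and stochastic renewable constraints, and the “l1-proximal” term, all omitted here; every name and number below is a made-up placeholder.

```python
import numpy as np

# Toy separable problem: two generators with quadratic costs must meet demand.
#   min  c1*x1^2 + c2*x2^2   s.t.  x1 + x2 = demand,  xi on a discrete grid
# The coupling constraint is relaxed with multiplier lam, so each subproblem
# can be solved independently, mimicking the decomposition used for the
# micro-grid AC-OPF.

c = np.array([1.0, 0.5])
levels = np.linspace(0.0, 10.0, 41)   # discrete output levels (like tap positions)
demand = 8.0

def solve_subproblem(i, lam):
    # argmin over the discrete grid of c_i * x^2 - lam * x
    costs = c[i] * levels ** 2 - lam * levels
    return levels[np.argmin(costs)]

lam, step = 0.0, 1.0
x = np.zeros(2)
for k in range(1, 200):
    i = k % 2                      # surrogate step: re-optimize only one
    x[i] = solve_subproblem(i, lam)  # subproblem per multiplier update
    violation = demand - x.sum()   # surrogate subgradient
    lam += step * violation        # multiplier update
    step *= 0.98                   # decreasing stepsize (stand-in for the SLR rule)

print(f"x = {x}, total = {x.sum():.2f}, lambda = {lam:.2f}")
```

Even in this toy, the two hallmarks of the surrogate approach are visible: only one subproblem is re-optimized per multiplier update, and the stepsize decreases over iterations; the geometric decay used here merely stands in for the method's actual stepsize conditions.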
Learning invariant representations is a critical first step in a number of machine learning tasks. A common approach is given by the so-called information bottleneck principle, in which an application-dependent function of mutual information is carefully chosen and optimized. Unfortunately, in practice these functions are not well suited to optimization, since the losses are agnostic to the metric structure of the model's parameters. In this paper, we introduce a class of losses for learning representations that are invariant to some extraneous variable of interest by inverting the class of contrastive losses, i.e., the inverse contrastive loss (ICL). We show that if the extraneous variable is binary, then optimizing ICL is equivalent to optimizing a regularized MMD divergence. More generally, we show that if we are provided a metric on the sample space, our formulation of ICL can be decomposed into a sum of convex functions of the given distance metric. Our experimental results indicate that models obtained by optimizing ICL achieve significantly better invariance to the extraneous variable for a fixed desired level of accuracy. In a variety of experimental settings, we show the applicability of ICL for learning invariant representations for both continuous and discrete protected/extraneous variables. The project page with code is available at https://github.com/adityakumarakash/ICL
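The binary-variable equivalence stated above can be illustrated with a small sketch: an MMD penalty between the representations of the two groups induced by the binary extraneous variable, added to a task loss. This shows only the regularized-MMD side of the equivalence, not the ICL construction itself; the function names and the RBF kernel choice are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(Z0, Z1, sigma=1.0):
    """Biased estimate of the squared MMD between the representations
    of the two groups defined by a binary extraneous variable."""
    return (rbf_kernel(Z0, Z0, sigma).mean()
            + rbf_kernel(Z1, Z1, sigma).mean()
            - 2.0 * rbf_kernel(Z0, Z1, sigma).mean())

def invariant_objective(task_loss, Z, s, lam=1.0, sigma=1.0):
    """Hypothetical combined objective: task loss plus an invariance
    penalty that shrinks as the two groups' representations align."""
    Z0, Z1 = Z[s == 0], Z[s == 1]
    return task_loss + lam * mmd2(Z0, Z1, sigma)

# Toy usage with random 16-dimensional representations.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))
s = rng.integers(0, 2, size=100)
print(invariant_objective(task_loss=0.3, Z=Z, s=s))
```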